Results 1 - 20 of 20,210
1.
Biomed Phys Eng Express ; 10(3), 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38631317

ABSTRACT

Introduction. Currently available dosimetry techniques in computed tomography can be inaccurate and may overestimate the absorbed dose. Therefore, we aimed to provide an automated and fast methodology to calculate the size-specific dose estimate (SSDE) more accurately, using the water-equivalent diameter (Dw) obtained with a convolutional neural network (CNN) from thorax and abdominal CT study images. Methods. The SSDE was determined from 200 patient records. For that purpose, patient size was measured in two ways: (a) by developing an algorithm following the AAPM Report No. 204 methodology; and (b) using a CNN according to AAPM Report No. 220. Results. Patient size measured by the in-house software in the thorax and abdomen regions was 27.63 ± 3.23 cm and 28.66 ± 3.37 cm, respectively, while the CNN-derived values were 18.90 ± 2.6 cm and 21.77 ± 2.45 cm. The SSDE in the thorax according to Reports 204 and 220 was 17.26 ± 2.81 mGy and 23.70 ± 2.96 mGy for women and 17.08 ± 2.09 mGy and 23.47 ± 2.34 mGy for men, respectively. In the abdomen it was 18.54 ± 2.25 mGy and 23.40 ± 1.88 mGy for women and 18.37 ± 2.31 mGy and 23.84 ± 2.36 mGy for men. Conclusions. Implementing CNN-based automated methodologies can contribute to fast and accurate dose calculations, thereby improving patient-specific radiation safety in clinical practice.
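For context, the SSDE calculation these reports describe reduces to multiplying the scanner-reported CTDIvol by a size-dependent conversion factor. A minimal sketch is below, assuming the widely quoted exponential conversion-factor fit from AAPM Report No. 204 for the 32 cm reference phantom and the water-equivalent-diameter definition from Report No. 220; coefficients and formulas should be verified against the reports before any clinical use, and the input slice is assumed to contain only in-patient pixels.

```python
import numpy as np

# Conversion-factor fit for the 32 cm reference phantom (commonly quoted from
# AAPM Report No. 204); verify these coefficients against the report.
A, B = 3.704369, 0.03671937

def water_equivalent_diameter(hu_slice: np.ndarray, pixel_area_cm2: float) -> float:
    """Dw (cm) from the HU values of pixels inside the patient contour (Report No. 220)."""
    area_cm2 = hu_slice.size * pixel_area_cm2
    mean_hu = hu_slice.mean()
    return 2.0 * np.sqrt((mean_hu / 1000.0 + 1.0) * area_cm2 / np.pi)

def ssde(ctdi_vol_mgy: float, dw_cm: float) -> float:
    """Size-specific dose estimate = conversion factor x CTDIvol."""
    return A * np.exp(-B * dw_cm) * ctdi_vol_mgy

print(round(ssde(10.0, 25.0), 1))  # ~14.8 mGy for CTDIvol = 10 mGy and Dw = 25 cm
```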


Subjects
Algorithms; Radiation Dosage; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Male; Female; Body Size; Neural Networks, Computer; Software; Automation; Thorax/diagnostic imaging; Adult; Abdomen/diagnostic imaging; Radiometry/methods; Radiography, Thoracic/methods; Middle Aged; Image Processing, Computer-Assisted/methods; Radiography, Abdominal/methods; Aged
2.
Nat Commun ; 15(1): 3447, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38658554

ABSTRACT

Achieving cost-competitive bio-based processes requires the development of stable and selective biocatalysts. Their realization through in vitro enzyme characterization and engineering is mostly low-throughput and labor-intensive. Therefore, strategies for increasing throughput while diminishing manual labor are gaining momentum, such as in vivo screening and evolution campaigns. Computational tools like machine learning (ML) further support enzyme engineering efforts by widening the explorable design space. Here, we propose an integrated solution to enzyme engineering challenges whereby ML-guided, automated workflows (including library generation, implementation of hypermutation systems, adaptive laboratory evolution, and in vivo growth-coupled selection) could be realized to accelerate pipelines towards superior biocatalysts.


Subjects
Biocatalysis; Protein Engineering; Protein Engineering/methods; Enzymes/metabolism; Enzymes/genetics; Enzymes/chemistry; Machine Learning; Directed Molecular Evolution/methods; Automation; Gene Library
3.
Comput Methods Programs Biomed ; 249: 108141, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38574423

ABSTRACT

BACKGROUND AND OBJECTIVE: Lung tumor annotation is a key upstream task for further diagnosis and prognosis. Although deep learning techniques have promoted automation of lung tumor segmentation, challenges remain that impede its application in clinical practice, such as the lack of prior annotations for model training and the barriers to data sharing among centers. METHODS: In this paper, we use data from six centers to design a novel federated semi-supervised learning (FSSL) framework with dynamic model aggregation and improve segmentation performance for lung tumors. Specifically, we propose a dynamically updated algorithm for model parameter aggregation in FSSL that takes advantage of both the quality and the quantity of client data. Moreover, to increase the accessibility of data in the federated learning (FL) network, we explore the FAIR data principles, which previous federated methods have not addressed. RESULTS: The experimental results show that the segmentation performance of our model in the six centers is 0.9348, 0.8436, 0.8328, 0.7776, 0.8870 and 0.8460, respectively, which is superior to traditional deep learning methods and recent federated semi-supervised learning methods. CONCLUSION: The experimental results demonstrate that our method is superior to existing FSSL methods. In addition, our proposed dynamic update strategy effectively utilizes the quality and quantity information of client data and is efficient for lung tumor segmentation. The source code is released at https://github.com/GDPHMediaLab/FedDUS.
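To make the aggregation idea concrete, the sketch below shows a FedAvg-style parameter average whose per-client weights combine data quantity with a quality proxy (for example, a local validation Dice score). The product weighting rule and the function names are hypothetical illustrations, not the exact FedDUS update; the authors' repository contains the actual strategy.

```python
import numpy as np

def aggregate(client_states, n_samples, quality):
    """Weighted parameter aggregation across clients.

    client_states: list of dicts mapping parameter name -> np.ndarray
    n_samples:     number of samples held by each client (quantity term)
    quality:       quality proxy per client, e.g. local validation Dice
    The product weighting below is an illustrative choice only.
    """
    w = np.asarray(n_samples, dtype=float) * np.asarray(quality, dtype=float)
    w = w / w.sum()
    keys = client_states[0].keys()
    return {k: sum(wi * state[k] for wi, state in zip(w, client_states)) for k in keys}

# Example: three clients sharing a single 2x2 weight matrix
states = [{"conv.weight": np.full((2, 2), v)} for v in (0.2, 0.5, 0.9)]
global_state = aggregate(states, n_samples=[120, 60, 220], quality=[0.84, 0.78, 0.93])
print(global_state["conv.weight"])
```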


Subjects
Algorithms; Lung Neoplasms; Humans; Automation; Lung Neoplasms/diagnostic imaging; Software; Supervised Machine Learning; Tomography, X-Ray Computed; Image Processing, Computer-Assisted
4.
Radiographics ; 44(5): e230067, 2024 May.
Article in English | MEDLINE | ID: mdl-38635456

ABSTRACT

Artificial intelligence (AI) algorithms are prone to bias at multiple stages of model development, with potential for exacerbating health disparities. However, bias in imaging AI is a complex topic that encompasses multiple coexisting definitions. Bias may refer to unequal preference for a person or group owing to preexisting attitudes or beliefs, either intentional or unintentional. In contrast, cognitive bias refers to systematic deviation from objective judgment due to reliance on heuristics, and statistical bias refers to differences between true and expected values, commonly manifesting as systematic error in model prediction (ie, a model with output unrepresentative of real-world conditions). Clinical decisions informed by biased models may lead to patient harm due to action on inaccurate AI results or exacerbate health inequities due to differing performance among patient populations. However, while inequitable bias can harm patients in this context, a mindful approach leveraging equitable bias can address underrepresentation of minority groups or rare diseases. Radiologists should also be aware of bias after AI deployment such as automation bias, or a tendency to agree with automated decisions despite contrary evidence. Understanding common sources of imaging AI bias and the consequences of using biased models can guide preventive measures to mitigate its impact. Accordingly, the authors focus on sources of bias at stages along the imaging machine learning life cycle, attempting to simplify potentially intimidating technical terminology for general radiologists using AI tools in practice or collaborating with data scientists and engineers for AI tool development. The authors review definitions of bias in AI, describe common sources of bias, and present recommendations to guide quality control measures to mitigate the impact of bias in imaging AI. Understanding the terms featured in this article will enable a proactive approach to identifying and mitigating bias in imaging AI. Published under a CC BY 4.0 license. Test Your Knowledge questions for this article are available in the supplemental material. See the invited commentary by Rouzrokh and Erickson in this issue.


Subjects
Algorithms; Artificial Intelligence; Humans; Automation; Machine Learning; Bias
5.
Nutrients ; 16(7), 2024 Apr 06.
Article in English | MEDLINE | ID: mdl-38613106

ABSTRACT

In Industry 4.0, where the automation and digitalization of entities and processes are fundamental, artificial intelligence (AI) is increasingly becoming a pivotal tool offering innovative solutions in various domains. In this context, nutrition, a critical aspect of public health, is no exception to the fields influenced by the integration of AI technology. This study aims to comprehensively investigate the current landscape of AI in nutrition, providing a deep understanding of the potential of AI, machine learning (ML), and deep learning (DL) in the nutrition sciences and highlighting current challenges and future directions. A hybrid approach combining systematic literature review (SLR) guidelines and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was adopted to systematically analyze the scientific literature retrieved from searches of major databases on artificial intelligence in nutrition sciences. A rigorous study selection was conducted using appropriate eligibility criteria, followed by a methodological quality assessment ensuring the robustness of the included studies. This review identifies several AI applications in nutrition, spanning smart and personalized nutrition, dietary assessment, food recognition and tracking, predictive modeling for disease prevention, and disease diagnosis and monitoring. The selected studies demonstrated the versatility of machine learning and deep learning techniques in handling complex relationships within nutritional datasets. This study provides a comprehensive overview of the current state of AI applications in nutrition sciences and identifies challenges and opportunities. With the rapid advancement of AI, its integration into nutrition holds significant promise for enhancing individual nutritional outcomes and optimizing dietary recommendations. Researchers, policymakers, and healthcare professionals can utilize this research to design future projects and support evidence-based decision-making in AI for nutrition and dietary guidance.


Subjects
Artificial Intelligence; Deep Learning; Humans; Machine Learning; Nutritional Status; Automation
6.
Anal Chem ; 96(16): 6282-6291, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38595038

ABSTRACT

Respiratory tract infections (RTIs) pose a grave threat to human health, with bacterial pathogens being the primary culprits behind severe illness and mortality. In response to this pressing issue, we developed a centrifugal microfluidic chip integrated with a recombinase-aided amplification (RAA)-clustered regularly interspaced short palindromic repeats (CRISPR) system to achieve rapid detection of respiratory pathogens. The limitations of conventional two-step CRISPR-mediated systems were effectively addressed by employing the all-in-one RAA-CRISPR detection method, thereby enhancing the accuracy and sensitivity of bacterial detection. Moreover, the integration of a centrifugal microfluidic chip reduced sample consumption and significantly improved detection throughput, enabling the simultaneous detection of multiple respiratory pathogens. Furthermore, the incorporation of Chelex-100 in the sample pretreatment enabled a sample-to-answer capability. This pivotal addition facilitated the deployment of the system in real clinical sample testing, enabling the accurate detection of 12 common respiratory bacteria within a set of 60 clinical samples. The system offers rapid and reliable results that are crucial for clinical diagnosis, enabling healthcare professionals to administer timely and accurate treatment interventions to patients.


Subjects
Respiratory Tract Infections; Respiratory Tract Infections/diagnosis; Respiratory Tract Infections/microbiology; Humans; Microfluidic Analytical Techniques/instrumentation; Lab-On-A-Chip Devices; Nucleic Acid Amplification Techniques; Clustered Regularly Interspaced Short Palindromic Repeats/genetics; Bacteria/isolation & purification; Bacteria/genetics; Recombinases/metabolism; Automation; Bacterial Infections/diagnosis
7.
Cogn Res Princ Implic ; 9(1): 21, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38598036

ABSTRACT

The use of partially-automated systems requires drivers to supervise system functioning and resume manual control whenever necessary. Yet the literature on vehicle automation shows that drivers may spend more time looking away from the road when the partially-automated system is operational. In this study we ask whether this pattern is a manifestation of inattentional blindness or whether, more dangerously, it is also accompanied by greater attentional processing of the driving scene. Participants drove a simulated vehicle in manual or partially-automated mode. Fixations were recorded by means of a head-mounted eye-tracker. A surprise two-alternative forced-choice recognition task was administered at the end of data collection, in which participants were quizzed on the presence of roadside billboards that they had encountered during the two drives. The data showed that participants were more likely to fixate and recognize billboards when the automated system was operational. Furthermore, whereas fixations toward billboards decreased toward the end of the automated drive, performance in the recognition task did not suffer. Based on these findings, we hypothesize that use of the partially-automated driving system may increase attention allocation toward peripheral objects in the road scene, which is detrimental to drivers' ability to supervise the automated system and resume manual control of the vehicle.


Subjects
Blindness; Mental Disorders; Humans; Automation; Data Collection; Recognition, Psychology
8.
Cogn Res Princ Implic ; 9(1): 20, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38589710

ABSTRACT

In service of the goal of examining how cognitive science can facilitate human-computer interactions in complex systems, we explore how cognitive psychology research might help educators better utilize artificial intelligence and AI-supported tools as facilitators of learning, rather than seeing these emerging technologies as a threat. We also aim to provide historical perspective, both on how automation and technology have generated unnecessary apprehension over time and on how generative AI technologies such as ChatGPT are a product of the discipline of cognitive science. We introduce a model for how higher education instruction can adapt to the age of AI by fully capitalizing on the role that metacognitive knowledge and skills play in determining learning effectiveness. Finally, we urge educators to consider how AI can be seen as a critical collaborator to be utilized in our efforts to educate around the critical workforce skills of effective communication and collaboration.


Subjects
Artificial Intelligence; Cognitive Psychology; Humans; Automation; Cognitive Science; Learning
9.
Sensors (Basel) ; 24(7), 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38610583

ABSTRACT

Due to the global population increase and the recovery of agricultural demand after the COVID-19 pandemic, the importance of agricultural automation and autonomous agricultural vehicles is growing. Fallen person detection is critical to preventing fatal accidents during autonomous agricultural vehicle operations. However, there is a challenge due to the relatively limited dataset for fallen persons in off-road environments compared to on-road pedestrian datasets. To enhance the generalization performance of fallen person detection off-road using object detection technology, data augmentation is necessary. This paper proposes a data augmentation technique called Automated Region of Interest Copy-Paste (ARCP) to address the issue of data scarcity. The technique involves copying real fallen person objects obtained from public source datasets and then pasting the objects onto a background off-road dataset. Segmentation annotations for these objects are generated using YOLOv8x-seg and Grounded-Segment-Anything, respectively. The proposed algorithm is then applied to automatically produce augmented data based on the generated segmentation annotations. The technique encompasses segmentation annotation generation, Intersection over Union-based segment setting, and Region of Interest configuration. When the ARCP technique is applied, significant improvements in detection accuracy are observed for two state-of-the-art object detectors: anchor-based YOLOv7x and anchor-free YOLOv8x, showing an increase of 17.8% (from 77.8% to 95.6%) and 12.4% (from 83.8% to 96.2%), respectively. This suggests high applicability for addressing the challenges of limited datasets in off-road environments and is expected to have a significant impact on the advancement of object detection technology in the agricultural industry.
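The core copy-paste step of such augmentation (blending a mask-defined object crop into a background image, with an overlap check on candidate placements) can be sketched as follows. The array conventions, function names, and the simple IoU helper are assumptions for illustration; the ROI configuration, segmentation-annotation generation, and label export that ARCP adds around this step are omitted.

```python
import numpy as np

def paste_object(background, obj_img, obj_mask, top_left):
    """Paste a masked object crop onto a background image (H x W x 3 uint8 arrays).

    obj_img / obj_mask are the cropped object and its binary segmentation mask;
    top_left is the (row, col) where the crop's upper-left corner lands.
    """
    out = background.copy()
    y, x = top_left
    h, w = obj_mask.shape
    region = out[y:y + h, x:x + w]
    region[obj_mask > 0] = obj_img[obj_mask > 0]
    return out

def box_iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes, usable to reject overlapping placements."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

# Minimal usage: drop a 40x40 object into a 480x640 background at (200, 300).
bg = np.zeros((480, 640, 3), dtype=np.uint8)
obj = np.full((40, 40, 3), 255, dtype=np.uint8)
mask = np.ones((40, 40), dtype=np.uint8)
augmented = paste_object(bg, obj, mask, (200, 300))
```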


Subjects
Agriculture; Pandemics; Humans; Technology; Algorithms; Automation
10.
J Robot Surg ; 18(1): 102, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38427094

ABSTRACT

Artificial intelligence (AI) is revolutionizing nearly every aspect of modern life. In the medical field, robotic surgery is the sector with some of the most innovative and impactful advancements. In this narrative review, we outline recent contributions of AI to the field of robotic surgery, with a particular focus on intraoperative enhancement. AI modeling is giving surgeons advanced intraoperative metrics such as force and tactile measurements, enhancing detection of positive surgical margins, and even allowing complete automation of certain steps in surgical procedures. AI is also revolutionizing the field of surgical education. AI modeling applied to intraoperative surgical video feeds and instrument kinematics data is allowing for the generation of automated skills assessments. AI also shows promise for the generation and delivery of highly specialized intraoperative surgical feedback for training surgeons. Although the adoption and integration of AI show promise in robotic surgery, they raise important, complex ethical questions. Frameworks for thinking through the ethical dilemmas raised by AI are outlined in this review. AI enhancement of robotic surgery is among the most groundbreaking research happening today, and the studies outlined in this review represent some of the most exciting innovations of recent years.


Subjects
Artificial Intelligence; Robotic Surgical Procedures; Humans; Automation; Benchmarking; Robotic Surgical Procedures/methods; Surgeons
11.
PLoS One ; 19(3): e0299456, 2024.
Article in English | MEDLINE | ID: mdl-38452131

ABSTRACT

Continual technological advances associated with the recent automation revolution have tremendously increased the impact of computer technology in industry. Software development and testing are time-consuming processes, and the current market faces a lack of specialized experts. Introducing automation to this field could therefore improve software engineers' common workflow and decrease time to market. Even though many code-generating algorithms have been proposed for textual programming languages, to the best of the authors' knowledge, no study deals with the implementation of such algorithms in graphical programming environments, especially LabVIEW. The main goal of this study is therefore to conduct a proof of concept for a requirement-based automated code-developing system within the graphical programming environment LabVIEW. The proposed framework was evaluated on four basic benchmark problems, encompassing a string model, a numeric model, a boolean model and a mixed-type problem model, which cover fundamental programming scenarios. In all tested cases, the algorithm demonstrated the ability to create satisfactory, functional, and error-free solutions that met all user-defined requirements. Even though the generated programs were burdened with redundant objects and were much more complex than programmer-developed code, this had no effect on the code's execution speed or accuracy. Based on the achieved results, we conclude that this pilot study not only proved the feasibility and viability of the proposed concept but also showed promising results in solving linear and binary programming tasks. Furthermore, the results revealed that with further research this poorly explored field could become a powerful tool not only for application developers but also for non-programmers and low-skilled users.


Subjects
Programming Languages; Software; Pilot Projects; Algorithms; Automation
13.
Methods Mol Biol ; 2760: 393-412, 2024.
Article in English | MEDLINE | ID: mdl-38468100

ABSTRACT

Genetic design automation (GDA) is the use of computer-aided design (CAD) in designing genetic networks. GDA tools are necessary to create more complex synthetic genetic networks in a high-throughput fashion. At the core of these tools is the abstraction of a hierarchy of standardized components. The components' input, output, and interactions must be captured and parametrized from relevant experimental data. Simulations of genetic networks should use those parameters and include the experimental context to be compared with the experimental results. This chapter introduces Logical Operators for Integrated Cell Algorithms (LOICA), a Python package used for designing, modeling, and characterizing genetic networks using a simple object-oriented design abstraction. LOICA represents different biological and experimental components as classes that interact to generate models. These models can be parametrized by direct connection to the Flapjack experimental data management platform to characterize abstracted components with experimental data. The models can be simulated using stochastic simulation algorithms or ordinary differential equations with varying noise levels. The simulated data can be managed and published using Flapjack alongside experimental data for comparison. LOICA genetic network designs can be represented as graphs and plotted as networks for visual inspection and serialized as Python objects or in the Synthetic Biology Open Language (SBOL) format for sharing and use in other designs.
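To give a flavor of the object-oriented abstraction described above (component classes that compose into a network and produce a deterministic simulation), here is a small self-contained sketch. The class names, methods, and parameter values are invented for illustration and are not LOICA's actual API; consult the package documentation for the real interfaces.

```python
import numpy as np

class GeneProduct:
    """A named species with a first-order degradation rate (illustrative only)."""
    def __init__(self, name, degradation_rate=0.1):
        self.name = name
        self.degradation_rate = degradation_rate

class Repressor:
    """Hill-type repression of `output` production by `input_` (not LOICA's API)."""
    def __init__(self, input_, output, rate=1.0, K=10.0, n=2.0):
        self.input_, self.output = input_, output
        self.rate, self.K, self.n = rate, K, n

    def production(self, input_level):
        return self.rate / (1.0 + (input_level / self.K) ** self.n)

def simulate(products, operators, t_end=100.0, dt=0.01):
    """Euler integration of the deterministic (ODE) network model."""
    degradation = {p.name: p.degradation_rate for p in products}
    levels = {p.name: 0.0 for p in products}
    for _ in np.arange(0.0, t_end, dt):
        delta = {name: -degradation[name] * level for name, level in levels.items()}
        for op in operators:
            delta[op.output.name] += op.production(levels[op.input_.name])
        for name in levels:
            levels[name] += dt * delta[name]
    return levels

# A two-gene mutual-repression network, simulated deterministically
a, b = GeneProduct("A"), GeneProduct("B")
print(simulate([a, b], [Repressor(a, b), Repressor(b, a)]))
```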


Subjects
Programming Languages; Software; Gene Regulatory Networks; Algorithms; Synthetic Biology/methods; Automation
14.
J Chem Inf Model ; 64(8): 3059-3079, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38498942

ABSTRACT

Condensing the many physical variables defining a chemical system into a fixed-size array poses a significant challenge in the development of chemical Machine Learning (ML). Atom Centered Symmetry Functions (ACSFs) offer an intuitive featurization approach by means of a tedious and labor-intensive selection of tunable parameters. In this work, we implement an unsupervised ML strategy relying on a Gaussian Mixture Model (GMM) to automatically optimize the ACSF parameters. GMMs effortlessly decompose the vastness of the chemical and conformational spaces into well-defined radial and angular clusters, which are then used to build tailor-made ACSFs. The unsupervised exploration of the space has demonstrated general applicability across a diverse range of systems, spanning from various unimolecular landscapes to heterogeneous databases. The impact of the sampling technique and temperature on space exploration is also addressed, highlighting the particularly advantageous role of high-temperature Molecular Dynamics (MD) simulations. The reliability of the resulting features is assessed through the estimation of the atomic charges of a prototypical capped amino acid and a heterogeneous collection of CHON molecules. The automatically constructed ACSFs serve as high-quality descriptors, consistently yielding typical prediction errors below 0.010 electrons for the reported atomic charges. Altering the spatial distribution of the functions with respect to the cluster highlights the critical role of symmetry rupture in achieving significantly improved features. More specifically, using two separate functions to describe the lower and upper tails of the cluster results in the best performing models with errors as low as 0.006 electrons. Finally, the effectiveness of finely tuned features was checked across different architectures, unveiling the superior performance of Gaussian Process (GP) models over Feed Forward Neural Networks (FFNNs), particularly in low-data regimes, with nearly a 2-fold increase in prediction quality. Altogether, this approach paves the way toward an easier construction of local chemical descriptors, while providing valuable insights into how radial and angular spaces should be mapped. Finally, this work opens the possibility of encoding many-body information beyond angular terms into upcoming ML features.
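The core idea of clustering the sampled radial space with a GMM and reading symmetry-function parameters off the fitted components can be sketched roughly as follows. The mapping from component means and variances to the Rs and eta parameters of a G2-type radial function, and the placeholder distance data, are illustrative assumptions rather than the authors' exact prescription.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder data: interatomic distances that would be sampled, e.g., from MD snapshots.
rng = np.random.default_rng(0)
distances = rng.uniform(0.8, 6.0, size=(5000, 1))

# Fit a 1-D Gaussian mixture to the radial space and reuse its components
# as centers (Rs) and widths (eta) of radial symmetry functions.
gmm = GaussianMixture(n_components=8, random_state=0).fit(distances)
Rs = gmm.means_.ravel()
eta = 1.0 / (2.0 * gmm.covariances_.ravel())

def radial_acsf(r_ij, r_cut=6.0):
    """G2-type radial ACSF vector for one atom, given its neighbor distances r_ij."""
    r_ij = np.asarray(r_ij, dtype=float)
    fc = 0.5 * (np.cos(np.pi * r_ij / r_cut) + 1.0) * (r_ij < r_cut)
    return np.array([np.sum(np.exp(-e * (r_ij - rs) ** 2) * fc) for rs, e in zip(Rs, eta)])

print(radial_acsf([1.1, 1.5, 2.3, 4.0]))  # one 8-component feature vector
```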


Subjects
Molecular Dynamics Simulation; Unsupervised Machine Learning; Normal Distribution; Automation
15.
Anal Bioanal Chem ; 416(12): 2983-2993, 2024 May.
Article in English | MEDLINE | ID: mdl-38556595

ABSTRACT

Liquid chromatography (LC) or gas chromatography (GC) coupled to high-resolution mass spectrometry (HRMS) is a versatile analytical method for the analysis of thousands of chemical pollutants that can be found in environmental and biological samples. While the tools for handling such complex datasets have improved, there are still no fully automated workflows for targeted screening analysis. Here we present an R-based workflow that is able to cope with challenging data like noisy ion chromatograms, retention time shifts, and multiple peak patterns. The workflow can be applied to batches of HRMS data recorded after GC with electron ionization (GC-EI) and LC coupled to electrospray ionization in both negative and positive mode (LC-ESIneg/LC-ESIpos) to perform peak annotation and quantitation fully unsupervised. We used Orbitrap HRMS data of surface water extracts to compare the Automated Target Screening (ATS) workflow with data evaluations performed with the vendor software TraceFinder and the established semi-automated analysis workflow in the MZmine software. The ATS approach increased the overall evaluation performance of the peak annotation compared to the established MZmine module without the need for any post-hoc corrections. The overall accuracy increased from 0.80 to 0.86 (LC-ESIpos), from 0.77 to 0.83 (LC-ESIneg), and from 0.67 to 0.76 (GC-EI). The mean average percentage errors for quantification of ATS were around 30% compared to the manual quantification with TraceFinder. The ATS workflow enables time-efficient analysis of GC- and LC-HRMS data and accelerates and improves the applicability of target screening in studies with a large number of analytes and sample sizes without the need for manual intervention.
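Although the published workflow is implemented in R, the targeted-annotation step it automates (matching detected features to a target list within m/z and retention-time tolerances) can be illustrated with a short Python sketch. The compound names, tolerance values, and column names here are assumptions for illustration, not the workflow's actual defaults or data.

```python
import pandas as pd

# Hypothetical target list and detected features (values are illustrative only).
targets = pd.DataFrame({
    "name": ["atrazine", "carbamazepine"],
    "mz": [216.1010, 237.1022],
    "rt_min": [10.2, 8.7],
})
features = pd.DataFrame({
    "mz": [216.1013, 237.1031, 301.1410],
    "rt_min": [10.35, 8.74, 5.10],
    "area": [1.2e6, 8.4e5, 3.1e5],
})

def annotate(targets, features, ppm=5.0, rt_tol_min=0.3):
    """Assign each target the closest detected feature within m/z (ppm) and RT tolerances."""
    hits = []
    for _, t in targets.iterrows():
        cand = features[
            (abs(features["mz"] - t["mz"]) / t["mz"] * 1e6 <= ppm)
            & (abs(features["rt_min"] - t["rt_min"]) <= rt_tol_min)
        ]
        if not cand.empty:
            best = cand.iloc[(cand["rt_min"] - t["rt_min"]).abs().argmin()]
            hits.append({"name": t["name"], "mz": best["mz"],
                         "rt_min": best["rt_min"], "area": best["area"]})
    return pd.DataFrame(hits)

print(annotate(targets, features))
```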


Subjects
Workflow; Mass Spectrometry/methods; Software; Automation; Chromatography, Liquid/methods; Gas Chromatography-Mass Spectrometry/methods; Water Pollutants, Chemical/analysis
16.
Metabolomics ; 20(2): 41, 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38480600

ABSTRACT

BACKGROUND: The National Cancer Institute issued a Request for Information (RFI; NOT-CA-23-007) in October 2022, soliciting input on using and reusing metabolomics data. This RFI aimed to gather input on best practices for metabolomics data storage, management, and use/reuse. AIM OF REVIEW: The nuclear magnetic resonance (NMR) Interest Group within the Metabolomics Association of North America (MANA) prepared a set of recommendations regarding the deposition, archiving, use, and reuse of NMR-based and, to a lesser extent, mass spectrometry (MS)-based metabolomics datasets. These recommendations were built on the collective experiences of metabolomics researchers within MANA who are generating, handling, and analyzing diverse metabolomics datasets spanning experimental (sample handling and preparation, NMR/MS metabolomics data acquisition, processing, and spectral analyses) to computational (automation of spectral processing, univariate and multivariate statistical analysis, metabolite prediction and identification, multi-omics data integration, etc.) studies. KEY SCIENTIFIC CONCEPTS OF REVIEW: We provide a synopsis of our collective view regarding the use and reuse of metabolomics data and articulate several recommendations regarding best practices, which are aimed at encouraging researchers to strengthen efforts toward maximizing the utility of metabolomics data, multi-omics data integration, and enhancing the overall scientific impact of metabolomics studies.


Subjects
Magnetic Resonance Imaging; Metabolomics; Metabolomics/methods; Magnetic Resonance Spectroscopy/methods; Mass Spectrometry/methods; Automation
17.
Drug Metab Dispos ; 52(5): 377-389, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38438166

ABSTRACT

The determination of metabolic stability is critical for drug discovery programs, allowing for the optimization of chemical entities and compound prioritization. As such, it is common to perform high-volume in vitro metabolic stability experiments early in the lead optimization process to understand metabolic liabilities. Additional metabolite identification experiments are subsequently performed for a more comprehensive understanding of the metabolic clearance routes to aid medicinal chemists in the structural design of compounds. Collectively, these experiments require extensive sample preparation and a substantial amount of time and resources. To overcome these challenges, a high-throughput integrated assay for simultaneous hepatocyte metabolic stability assessment and metabolite profiling was developed. This assay platform consists of four parts: 1) an automated liquid-handling system for sample preparation and incubation, 2) a liquid chromatography and high-resolution mass spectrometry-based system to simultaneously monitor parent compound depletion and metabolite formation, 3) an automated data analysis and report system for hepatic clearance assessment, and 4) streamlined autobatch processing for software-based metabolite profiling. The assay platform was evaluated using eight control compounds with various metabolic rates and biotransformation routes in hepatocytes across three species. Multiple sample preparation and data analysis steps were evaluated and validated for accuracy, repeatability, and metabolite coverage. The combined utility of an automated liquid-handling instrument, a high-resolution mass spectrometer, and multiple streamlined data processing software packages improves these highly demanding screening assays and allows for simultaneous determination of metabolic stability and metabolite profiles for more efficient lead optimization during early drug discovery. SIGNIFICANCE STATEMENT: Metabolic stability assessment and metabolite profiling are pivotal in drug discovery to fully comprehend metabolic liabilities for chemical entity optimization and lead selection. These assays can be repetitive and resource-demanding. Here, we developed an integrated hepatocyte stability assay that combines automation, high-resolution mass spectrometers, and batch-processing software to improve and combine the workflow of these assays. The integrated approach allows simultaneous metabolic stability assessment and metabolite profiling, significantly accelerating screening and lead optimization in a resource-effective manner.
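For reference, the hepatic-clearance figure such stability assays report is usually derived from first-order parent depletion. The sketch below shows that standard substrate-depletion calculation with made-up data and assumed incubation conditions; it is not the authors' pipeline, software, or values.

```python
import numpy as np

# Illustrative depletion data (not from the paper): % parent remaining over time.
t_min = np.array([0, 15, 30, 60, 90, 120], dtype=float)
pct_remaining = np.array([100, 78, 61, 37, 23, 14], dtype=float)

# First-order elimination rate constant from a log-linear fit of the depletion curve.
k = -np.polyfit(t_min, np.log(pct_remaining), 1)[0]   # 1/min
t_half = np.log(2) / k                                # min

# Scale to intrinsic clearance; incubation volume and cell density are assumptions.
incubation_volume_ul = 1000.0   # 1 mL incubation
million_cells = 0.5             # 0.5e6 hepatocytes per mL
clint = k * incubation_volume_ul / million_cells      # uL/min per 1e6 cells

print(f"t1/2 = {t_half:.0f} min, CLint = {clint:.0f} uL/min/1e6 cells")
```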


Subjects
Hepatocytes; Software; Chromatography, Liquid/methods; Mass Spectrometry; Automation
18.
Accid Anal Prev ; 200: 107501, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38471236

ABSTRACT

Human drivers are gradually being replaced by highly automated driving systems, and this trend is expected to persist. The response of autonomous vehicles to Ambiguous Driving Scenarios (ADS) is crucial for legal and safety reasons. Our research focuses on establishing a robust framework for developing ADS for autonomous vehicles and classifying them based on AV user perceptions. To achieve this, we conducted extensive literature reviews, in-depth interviews with industry experts, a comprehensive questionnaire survey, and factor analysis. We created 28 diverse ambiguous driving scenarios and examined 548 AV users' perspectives on their moral, ethical, legal, utility, and safety aspects. Based on the results, we grouped the ADS, all of which received their highest user-perception ratings for safety. We classified scenarios where autonomous vehicles yield to others as moral, bottleneck scenarios as ethical, cross-over scenarios as legal, and scenarios where vehicles come to a halt as utility-related. Additionally, this study is expected to make a valuable contribution to the field of self-driving cars by presenting new perspectives on policy and algorithm development, aiming to improve the safety and convenience of autonomous driving.


Subjects
Automobile Driving; Humans; Accidents, Traffic/prevention & control; Autonomous Vehicles; Automation; Algorithms
19.
Accid Anal Prev ; 200: 107537, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38471237

ABSTRACT

The use of partially-automated or SAE level-2 vehicles is expected to change the role of the human driver from operator to supervisor, which may have an effect on the driver's workload and visual attention. In this study, 30 Ontario drivers operated a vehicle in manual and partially-automated mode. Cognitive workload was measured by means of the Detection Response Task, and visual attention was measured by means of coding glances on and off the forward roadway. No difference in cognitive workload was found between driving modes. However, drivers spent less time glancing at the forward roadway, and more time glancing at the vehicle's touchscreen. These data add to our knowledge of how vehicle automation affects cognitive workload and attention allocation, and show potential safety risks associated with the adoption of partially-automated driving.


Subjects
Automobile Driving; Humans; Automobile Driving/psychology; Accidents, Traffic; Reaction Time/physiology; Workload; Automation; Cognition
20.
Cogn Res Princ Implic ; 9(1): 17, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38530617

ABSTRACT

Previous work has demonstrated similarities and differences between aerial and terrestrial image viewing. Aerial scene categorization, a pivotal visual processing task for gathering geoinformation, heavily depends on rotation-invariant information. Aerial image-centered research has revealed effects of low-level features on performance of various aerial image interpretation tasks. However, there are fewer studies of viewing behavior for aerial scene categorization and of higher-level factors that might influence that categorization. In this paper, experienced subjects' eye movements were recorded while they were asked to categorize aerial scenes. A typical viewing center bias was observed. Eye movement patterns varied among categories. We explored the relationship of nine image statistics to observers' eye movements. Results showed that if the images were less homogeneous, and/or if they contained fewer or no salient diagnostic objects, viewing behavior became more exploratory. Higher- and object-level image statistics were predictive at both the image and scene category levels. Scanpaths were generally organized and small differences in scanpath randomness could be roughly captured by critical object saliency. Participants tended to fixate on critical objects. Image statistics included in this study showed rotational invariance. The results supported our hypothesis that the availability of diagnostic objects strongly influences eye movements in this task. In addition, this study provides supporting evidence for Loschky et al.'s (Journal of Vision, 15(6), 11, 2015) speculation that aerial scenes are categorized on the basis of image parts and individual objects. The findings were discussed in relation to theories of scene perception and their implications for automation development.


Subjects
Eye Movements; Visual Perception; Humans; Photic Stimulation/methods; Automation; Records